
    What Can Information Encapsulation Tell Us About Emotional Rationality?

    What can features of cognitive architecture, e.g. the information encapsulation of certain emotion-processing systems, tell us about emotional rationality? de Sousa proposes the following hypothesis: “the role of emotions is to supply the insufficiency of reason by imitating the encapsulation of perceptual modes” (de Sousa 1987: 195). Very roughly, emotion processing can sometimes occur in a way that is insensitive to what an agent already knows, and such processing can assist reasoning by restricting the response-options she considers. This paper provides an exposition and assessment of de Sousa’s hypothesis. I argue that information encapsulation is not essential to emotion-driven reasoning, as emotions can determine the relevance of response-options even without being encapsulated. However, I argue that encapsulation can still assist reasoning by restricting response-options more efficiently, and in a way that ensures that the options emotions deem relevant are not overridden by what the agent knows. I end by briefly explaining why this very feature also helps explain how emotions can, on occasion, hinder reasoning.

    Solving Tree Problems with Category Theory

    Artificial Intelligence (AI) has long pursued models, theories, and techniques to imbue machines with human-like general intelligence. Yet even the currently predominant data-driven approaches in AI seem to lack humans' unique ability to solve wide ranges of problems. This situation raises the question of whether there exist principles that underlie general problem-solving capabilities. We approach this question through the mathematical formulation of analogies across different problems and solutions. We focus in particular on problems that can be represented as tree-like structures. Most importantly, we adopt a category-theoretic approach in formalising tree problems as categories, and in proving the existence of equivalences across apparently unrelated problem domains. We prove the existence of a functor between the category of tree problems and the category of solutions. We also provide a weaker version of the functor by quantifying equivalences of problem categories using a metric on tree problems.
    Comment: 10 pages, 4 figures, International Conference on Artificial General Intelligence (AGI) 201
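    The paper's categorical construction is not reproduced here, but a loose, minimal Haskell sketch may help fix ideas: a polymorphic tree type whose Functor instance acts as a structure-preserving map, plus a generic fold that sends the same tree-shaped "problem" to different "solutions". The names Tree, solve, and the example values are illustrative assumptions, not code from the paper.

        -- A Tree type with a Functor instance: fmap relabels leaves while
        -- preserving the tree's shape (a structure-preserving map).
        data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

        instance Functor Tree where
          fmap f (Leaf x)   = Leaf (f x)
          fmap f (Node l r) = Node (fmap f l) (fmap f r)

        -- A generic fold: one tree-shaped "problem", many "solutions",
        -- each determined by the pair of functions supplied.
        solve :: (a -> b) -> (b -> b -> b) -> Tree a -> b
        solve leaf _    (Leaf x)   = leaf x
        solve leaf node (Node l r) = node (solve leaf node l) (solve leaf node r)

        main :: IO ()
        main = do
          let problem = Node (Leaf 3) (Node (Leaf 1) (Leaf 4)) :: Tree Int
          print (solve id (+) problem)                   -- sum of leaves: 8
          print (solve id max problem)                   -- largest leaf: 4
          print (solve show (++) (fmap (* 10) problem))  -- relabel, then concatenate: "301040"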

    The systematicity challenge to anti-representational dynamicism

    After more than twenty years of representational debate in the cognitive sciences, anti-representational dynamicism may be seen as offering a rival and radically new kind of explanation of systematicity phenomena. In this paper, I argue that, on the contrary, anti-representational dynamicism must face a version of the old systematicity challenge: either it does not explain systematicity, or else it is just an implementation of representational theories. To show this, I present a purely behavioral and representation-free account of systematicity. I then consider a case of insect sensorimotor systematic behavior: communicative behavior in honey bees. I conclude that anti-representational dynamicism fails to capture the fundamental trait of systematic behaviors qua systematic, i.e., their involving exercises of the same behavioral capacities. I suggest, finally, a collaborative strategy in pursuit of a rich and powerful account of this central phenomenon of high cognition at all levels of explanation, including the representational level.

    Categorial Compositionality III: F-(co)algebras and the Systematicity of Recursive Capacities in Human Cognition

    Human cognitive capacity includes recursively definable concepts, which are prevalent in domains involving lists, numbers, and languages. Cognitive science currently lacks a satisfactory explanation for the systematic nature of such capacities (i.e., why the capacity for some recursive cognitive abilities, e.g. finding the smallest number in a list, implies the capacity for certain others, such as finding the largest number, given knowledge of number order). The category-theoretic constructs of initial F-algebra, catamorphism, and their duals, final coalgebra and anamorphism, provide a formal, systematic treatment of recursion in computer science. Here, we use this formalism to explain the systematicity of recursive cognitive capacities without ad hoc assumptions (i.e., to the same explanatory standard used in our account of systematicity for non-recursive capacities). The presence of an initial algebra/final coalgebra explains systematicity because all recursive cognitive capacities in the domain of interest factor through (are composed of) the same component process. Moreover, this factorization is unique, hence no further (ad hoc) assumptions are required to establish the intrinsic connection between members of a group of systematically related capacities. This formulation also provides a new perspective on the relationship between recursive cognitive capacities. In particular, the link between number and language does not depend on recursion as such, but on the underlying functor on which the group of recursive capacities is based. Thus, many species (and infants) can employ recursive processes without having a full-blown capacity for number and language.
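    The following Haskell sketch uses the standard Fix/cata construction (not code from the paper) to illustrate the claim: two systematically related capacities, finding the smallest and the largest number in a list, factor through one and the same catamorphism and differ only in the algebra supplied to it.

        newtype Fix f = In (f (Fix f))

        -- Base functor for lists of values; recursion happens in the r slot.
        data ListF a r = NilF | ConsF a r

        instance Functor (ListF a) where
          fmap _ NilF        = NilF
          fmap g (ConsF x r) = ConsF x (g r)

        -- The catamorphism: every fold over the list structure factors
        -- through this single component process.
        cata :: Functor f => (f b -> b) -> Fix f -> b
        cata alg (In x) = alg (fmap (cata alg) x)

        -- Two systematically related capacities, differing only in the algebra.
        smallest, largest :: Fix (ListF Int) -> Maybe Int
        smallest = cata alg
          where alg NilF               = Nothing
                alg (ConsF x Nothing)  = Just x
                alg (ConsF x (Just y)) = Just (min x y)
        largest = cata alg
          where alg NilF               = Nothing
                alg (ConsF x Nothing)  = Just x
                alg (ConsF x (Just y)) = Just (max x y)

        -- The list 3, 1, 4 built from the initial algebra's constructors.
        example :: Fix (ListF Int)
        example = In (ConsF 3 (In (ConsF 1 (In (ConsF 4 (In NilF))))))

        main :: IO ()
        main = print (smallest example, largest example)  -- (Just 1,Just 4)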

    Categorial Compositionality: A Category Theory Explanation for the Systematicity of Human Cognition

    Classical and Connectionist theories of cognitive architecture seek to explain systematicity (i.e., the property of human cognition whereby cognitive capacity comes in groups of related behaviours) as a consequence of syntactically and functionally compositional representations, respectively. However, both theories depend on ad hoc assumptions to exclude specific instances of these forms of compositionality (e.g. grammars, networks) that do not account for systematicity. By analogy with the Ptolemaic (i.e. geocentric) theory of planetary motion, although either theory can be made consistent with the data, both nonetheless fail to fully explain it. Category theory, a branch of mathematics, provides an alternative explanation based on the formal concept of adjunction, which relates a pair of structure-preserving maps, called functors. A functor generalizes the notion of a map between representational states to include a map between state transformations (or processes). In a formal sense, systematicity is a necessary consequence of a higher-order theory of cognitive architecture, in contrast to the first-order theories derived from Classicism or Connectionism. Category theory offers a re-conceptualization for cognitive science, analogous to the one that Copernicus provided for astronomy: representational states are no longer the center of the cognitive universe; their place is taken by the relationships between the maps that transform them.
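    As a textbook illustration of an adjunction (not the paper's own construction), the product/exponential adjunction is witnessed in Haskell by currying and uncurrying, which convert between maps out of a pair and maps into a function type. The names curryHom and uncurryHom below are local, illustrative definitions.

        -- Two directions of the natural bijection between maps (a, b) -> c
        -- and maps a -> (b -> c), witnessing the adjunction.
        curryHom :: ((a, b) -> c) -> (a -> b -> c)
        curryHom f = \x y -> f (x, y)

        uncurryHom :: (a -> b -> c) -> ((a, b) -> c)
        uncurryHom g = \(x, y) -> g x y

        main :: IO ()
        main = do
          let add :: (Int, Int) -> Int
              add (x, y) = x + y
          print (curryHom add 2 3)                  -- 5
          print (uncurryHom (curryHom add) (2, 3))  -- 5, the round trip recovers add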

    Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach

    Explanations, a form of post-hoc interpretability, play an instrumental role in making systems accessible as AI continues to permeate complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design. It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems. In particular, we advocate for a reflective sociotechnical approach. We illustrate HCXAI through a case study of an explanation system for non-technical end-users, which shows how technical advancements and the understanding of human factors co-evolve. Building on the case study, we lay out open research questions pertaining to further refining our understanding of "who" the human is and extending beyond 1-to-1 human-computer interactions. Finally, we propose that a reflective HCXAI paradigm, mediated through the perspective of Critical Technical Practice and supplemented with strategies from HCI such as value-sensitive design and participatory design, not only helps us understand our intellectual blind spots, but can also open up new design and research spaces.
    Comment: In Proceedings of HCI International 2020: 22nd International Conference on Human-Computer Interaction

    Humans store about 1.5 megabytes of information during language acquisition

    We introduce theory-neutral estimates of the amount of information learners possess about how language works. We provide estimates at several levels of linguistic analysis: phonemes, wordforms, lexical semantics, word frequency, and syntax. Our best guess is that the average English-speaking adult has learned 12.5 million bits of information, the majority of which is lexical semantics. Interestingly, very little of this information is syntactic, even in our upper-bound analyses. Generally, our results suggest that learners possess remarkable inferential mechanisms capable of extracting, on average, nearly 2,000 bits of information about how language works each day for 18 years.
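    A quick back-of-the-envelope check of the quoted figures (the 365-day year and the 8-bit byte are the only assumptions here): 12.5 million bits is about 1.5 megabytes, and spread over 18 years it comes to roughly 1,900 bits per day.

        main :: IO ()
        main = do
          let bits       = 12.5e6 :: Double   -- estimated total from the abstract
              megabytes  = bits / 8 / 1e6     -- 8 bits per byte, 10^6 bytes per MB
              bitsPerDay = bits / (18 * 365)  -- assuming 365-day years
          putStrLn ("megabytes: "    ++ show megabytes)   -- ~1.56
          putStrLn ("bits per day: " ++ show bitsPerDay)  -- ~1902.6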